Morphological Perceptron


Morphological Perceptron with Competitive Layer: Training Using Convex-Concave Procedure

Cunha, Iara, Valle, Marcos Eduardo

arXiv.org Artificial Intelligence

A morphological perceptron is a multilayer feedforward neural network in which neurons perform elementary operations from mathematical morphology. For multiclass classification tasks, a morphological perceptron with a competitive layer (MPCL) is obtained by integrating a winner-take-all output layer into the standard morphological architecture. The non-differentiability of morphological operators renders gradient-based optimization methods unsuitable for training such networks. Consequently, alternative strategies that do not depend on gradient information are commonly adopted. This paper proposes the use of the convex-concave procedure (CCP) for training MPCL networks. The training problem is formulated as a difference of convex (DC) functions and solved iteratively using CCP, resulting in a sequence of linear programming subproblems. Computational experiments demonstrate the effectiveness of the proposed training method in addressing classification tasks with MPCL networks.
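The morphological layer and winner-take-all output described above can be sketched in a few lines. This is a minimal NumPy illustration of the forward pass only (the CCP training procedure is not shown), with illustrative, untrained weights:

```python
import numpy as np

def dilation_layer(x, W):
    """Morphological (max-plus) layer: out[k] = max_j (x[j] + W[k, j])."""
    return np.max(x[None, :] + W, axis=1)

def mpcl_predict(x, W):
    """MPCL forward pass: a morphological layer followed by a
    winner-take-all (argmax) competitive output layer."""
    return int(np.argmax(dilation_layer(x, W)))

# Toy example: 2 classes, 2 features, illustrative weights.
W = np.array([[ 0.0, -1.0],
              [-1.0,  0.0]])
x = np.array([0.5, 0.2])
print(mpcl_predict(x, W))  # -> 0
```

Because each output is a maximum of sums, the network is piecewise linear and non-differentiable at the points where the maximizing index changes, which is what motivates the gradient-free CCP training in the paper.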


Training Single-Layer Morphological Perceptron Using Convex-Concave Programming

Cunha, Iara, Valle, Marcos Eduardo

arXiv.org Artificial Intelligence

This paper concerns the training of a single-layer morphological perceptron using disciplined convex-concave programming (DCCP). We introduce an algorithm referred to as K-DDCCP, which combines the existing single-layer morphological perceptron (SLMP) model proposed by Ritter and Urcid with the weighted disciplined convex-concave programming (WDCCP) algorithm by Charisopoulos and Maragos. The proposed training algorithm leverages DCCP and formulates a non-convex optimization problem for binary classification. To tackle this problem, the constraints are expressed as differences of convex functions, enabling the application of the DCCP package. The experimental results confirm the effectiveness of the K-DDCCP algorithm in solving binary classification problems. Overall, this work contributes to the field of morphological neural networks by proposing an algorithm that extends the capabilities of the SLMP model.
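The core idea of the convex-concave procedure used here is to write the objective as a difference of convex functions g - h, linearize the concave part -h at the current iterate, and solve the resulting convex subproblem. A minimal hand-rolled sketch on a toy scalar DC function (not the paper's SLMP formulation, where the subproblems are linear programs):

```python
def ccp_minimize(w0, iters=20):
    """Convex-concave procedure on the toy DC function
    f(w) = w**2 - |w|  (g(w) = w**2 convex, h(w) = |w| convex).
    Each step linearizes h at the current iterate and solves the
    resulting convex subproblem in closed form."""
    w = w0
    for _ in range(iters):
        s = 1.0 if w >= 0 else -1.0   # a subgradient of |w| at w
        # argmin_w  w**2 - s*w  has the closed-form solution w = s / 2
        w = s / 2.0
    return w

print(ccp_minimize(0.3))  # -> 0.5, a global minimizer of f
```

Here f attains its minimum value of -1/4 at w = ±1/2, and the iteration converges to whichever minimizer is on the same side as the starting point.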


Approximation Capabilities of Neural Networks using Morphological Perceptrons and Generalizations

Chang, William, Hamad, Hassan, Chugg, Keith M.

arXiv.org Artificial Intelligence

Standard artificial neural networks (ANNs) use sum-product or multiply-accumulate node operations with a memoryless nonlinear activation. These neural networks are known to have universal function approximation capabilities. Previously proposed morphological perceptrons use max-sum, in place of sum-product, node processing and have promising properties for circuit implementations. In this paper we show that these max-sum ANNs do not have universal approximation capabilities. Furthermore, we consider proposed signed-max-sum and max-star-sum generalizations of morphological ANNs and show that these variants also do not have universal approximation capabilities. We contrast these variations to log-number system (LNS) implementations which also avoid multiplications, but do exhibit universal approximation capabilities.
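A max-sum node simply replaces the multiply-accumulate with a maximum of sums. A one-line NumPy sketch makes the non-universality intuition concrete: any composition of such nodes is piecewise linear with partial derivatives in {0, 1}, so a target like f(x) = 2x is out of reach regardless of depth or width.

```python
import numpy as np

def max_sum_node(x, w):
    """Max-sum (max-plus) node: max_i (x[i] + w[i]) in place of sum(x[i] * w[i])."""
    return np.max(x + w)

# On each linear piece the output copies exactly one input plus a constant,
# so every partial derivative is 0 or 1 -- slope 2 is unreachable.
x = np.array([1.0, 3.0, 2.0])
w = np.array([0.5, -1.0, 0.0])
print(max_sum_node(x, w))  # -> 2.0 (max of [1.5, 2.0, 2.0])
```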


Reduced Dilation-Erosion Perceptron for Binary Classification

Valle, Marcos Eduardo

arXiv.org Machine Learning

Dilation and erosion are two elementary operations from mathematical morphology, a non-linear lattice computing methodology widely used for image processing and analysis. The dilation-erosion perceptron (DEP) is a morphological neural network obtained by a convex combination of a dilation and an erosion followed by the application of a hard-limiter function for binary classification tasks. A DEP classifier can be trained using a convex-concave procedure along with the minimization of the hinge loss function. As a lattice computing model, the DEP classifier assumes the feature and class spaces are partially ordered sets. In many practical situations, however, there is no natural ordering for the feature patterns. Using concepts from multi-valued mathematical morphology, this paper introduces the reduced dilation-erosion (r-DEP) classifier. An r-DEP classifier is obtained by endowing the feature space with an appropriate reduced ordering. Such reduced ordering can be determined using two approaches: one based on an ensemble of support vector classifiers (SVCs) with different kernels, and the other based on a bagging of similar SVCs trained using different samples of the training set. Using several binary classification datasets from the OpenML repository, the ensemble and bagging r-DEP classifiers yielded, on average, higher balanced accuracy scores than the linear, polynomial, and radial basis function (RBF) SVCs, as well as their ensemble and a bagging of RBF SVCs.
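The DEP decision function described above, a convex combination of a dilation (max-plus) and an erosion (min-plus) followed by a hard limiter, can be sketched directly. The parameters below are illustrative, not trained, and the min-plus erosion convention is one common choice:

```python
import numpy as np

def dep_predict(x, a, b, lam):
    """Dilation-erosion perceptron: hard-limited convex combination
    of a dilation max_j (x[j] + a[j]) and an erosion min_j (x[j] + b[j])."""
    dilation = np.max(x + a)
    erosion = np.min(x + b)
    tau = lam * dilation + (1.0 - lam) * erosion
    return 1 if tau >= 0 else -1

# Illustrative (untrained) parameters for a 2-feature problem.
a = np.array([0.0, -0.5])
b = np.array([0.5,  0.0])
x = np.array([0.2,  0.4])
print(dep_predict(x, a, b, lam=0.5))  # -> 1
```

The r-DEP classifier applies this same decision rule after mapping the raw features into a space with a reduced ordering (e.g., SVC decision values), which is where the ensemble and bagging constructions enter.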


A Tropical Approach to Neural Networks with Piecewise Linear Activations

Charisopoulos, Vasileios, Maragos, Petros

arXiv.org Machine Learning

Traditional literature on pattern recognition and neural networks utilizes the linear perceptron introduced by Rosenblatt [40], a multiply-accumulate architecture fed into an (optional) activation function, as the building block of a multitude of complex architectures modelling neural computation. In recent years, multilayered, complex neural network architectures have enjoyed an unprecedented growth in popularity with the introduction of the paradigm of deep learning [4]. An illustrative example of the power of deep learning is Convolutional Neural Networks; although they were the state of the art when they were introduced two decades ago [24], it wasn't until recently that they were systematically applied to image recognition challenges [23], achieving results comparable to humans (e.g.